An Optimal Scheme for Speeding the Training Process of the Asynchronous Federated Learning based on Model Partition
[Published: 2025-02-08]

Authors: Lei Shi, Ying Ren, Jing Xu, Yibin Xie, Chen Fang, Yuqi Fan

Journal: Computing

Date: February 2025

Abstract: Federated learning (FL) in an edge computing environment holds significant promise for enabling artificial intelligence at the network's edge. Typically, FL requires clients to run complete Deep Neural Networks (DNNs), which can be a substantial burden for resource-constrained edge clients and may prevent them from completing tasks within the required timeframe. Using a model partition technique to execute only part of the DNN on the client can therefore greatly enhance the applicability of FL. In this paper, we propose an optimal algorithm that reduces time latency through model partition for asynchronous FL. The DNN model is divided into two segments: one deployed on the client and the other on the edge server for model training. However, varying model partition points across devices with differing transmission bandwidths can lead to large variations in time latency, so the most difficult part of our algorithm is determining suitable partition points for all devices. We first establish a metric that correlates learning accuracy with iteration frequency, and use this metric to construct the original mathematical model. Because the vast solution space of this model makes direct resolution impractical, we introduce an Optimal Bandwidth Allocation (OBA) algorithm to minimize total training time. The OBA algorithm first filters potential partition points based on network characteristics, then selects suitable partition points tailored to different clients and bandwidth allocations, thereby reducing training time. Simulation results demonstrate that our algorithm decreases time latency by 18% to 64% compared to seven other methods.
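To make the partition-point idea in the abstract concrete, the following is a minimal illustrative sketch, not the paper's OBA algorithm: for one client, every candidate split of an L-layer DNN is evaluated under a fixed bandwidth, and the split minimizing one iteration's latency (client compute + server compute + transmission at the cut) is chosen. All function names, cost models, and numbers here are hypothetical assumptions for illustration.

```python
def iteration_latency(flops, cut_bits, split, client_speed, server_speed, bandwidth):
    """Latency of one training iteration when layers [0, split) run on the
    client and layers [split, L) run on the edge server.

    flops      -- per-layer compute cost (floating-point operations)
    cut_bits   -- cut_bits[s] is the data volume (bits) sent over the link
                  when the model is split before layer s; cut_bits[0] is the
                  raw input (fully server-side training still uploads it)
    """
    client_time = sum(flops[:split]) / client_speed
    server_time = sum(flops[split:]) / server_speed
    comm_time = cut_bits[split] / bandwidth
    return client_time + server_time + comm_time


def best_partition(flops, cut_bits, client_speed, server_speed, bandwidth):
    """Exhaustively try every split point and return (split, latency)."""
    candidates = range(len(flops) + 1)  # split == 0 means fully server-side
    return min(
        ((s, iteration_latency(flops, cut_bits, s,
                               client_speed, server_speed, bandwidth))
         for s in candidates),
        key=lambda pair: pair[1],
    )
```

For example, with a 3-layer model whose raw input is expensive to upload but whose first-layer activations are small, `best_partition` places the cut after layer 1, keeping the cheap layer on the client and shipping the compact activations to the faster server. The paper's actual contribution goes further, jointly choosing partition points and bandwidth shares across many asynchronous clients.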

Citation: Lei Shi, Ying Ren, Jing Xu, Yibin Xie, Chen Fang, Yuqi Fan. An Optimal Scheme for Speeding the Training Process of the Asynchronous Federated Learning based on Model Partition [J]. Computing, 2025, 107: 67. DOI: https://doi.org/10.1007/s00607-025-01418-x

